Incomplete instance guided aeroengine blade instance segmentation
Rui HUANG, Chaoqun ZHANG, Xuyi CHENG, Yan XING, Bao ZHANG
Journal of Computer Applications    2024, 44 (1): 167-174.   DOI: 10.11772/j.issn.1001-9081.2023010037

Due to the lack of labeled engine blade data, current deep learning based instance segmentation methods cannot fully train the network model, which leads to sub-optimal segmentation results. To improve the precision of aeroengine blade instance segmentation, an aeroengine blade instance segmentation method based on incomplete instance guidance was proposed. By combining an existing instance segmentation method with an interactive segmentation method, promising aeroengine blade instance segmentation results were obtained. First, a small amount of labeled data was used to train the instance segmentation network, which generated initial instance segmentation results for the aeroengine blades. Second, each detected blade instance was divided into foreground and background; by selecting foreground and background seed points, the interactive segmentation method was used to generate a complete segmentation of the blade. After all blade instances were processed in turn, the final blade instance segmentation result was obtained by merging the per-instance results. All 72 training images were used to train SparseInst (Sparse Instance activation map for real-time instance segmentation) to produce the initial instance segmentation results, and the test set contained 56 images. The mean Average Precision (mAP) of the proposed method is 5.1 percentage points higher than that of SparseInst, and its mAP also exceeds that of state-of-the-art instance segmentation methods such as MASK R-CNN (Mask Region based Convolutional Neural Network), YOLACT (You Only Look At CoefficienTs) and BMASK-RCNN (Boundary-preserving MASK R-CNN).

Indoor scene recognition method combined with object detection
XU Jianglang, LI Linyan, WAN Xinjun, HU Fuyuan
Journal of Computer Applications    2021, 41 (9): 2720-2725.   DOI: 10.11772/j.issn.1001-9081.2020111815
In methods that combine an object detection network (ObjectNet) with a scene recognition network, the object features extracted by the ObjectNet and the scene features extracted by the scene network are inconsistent in dimensionality and property, and the object features contain redundant information that interferes with scene judgment, resulting in low scene recognition accuracy. To solve this problem, an improved indoor scene recognition method combined with object detection was proposed. First, a Class Conversion Matrix (CCM) was introduced into the ObjectNet to convert the object features it outputs, so that the dimension of the object features became consistent with that of the scene features, reducing the information loss caused by inconsistent feature dimensions. Then, the Context Gating (CG) mechanism was used to suppress redundant information in the features, reducing the weight of irrelevant information and increasing the contribution of object features to scene recognition. The recognition accuracy of the proposed method reaches 90.28% on the MIT Indoor67 dataset, 0.77 percentage points higher than that of the Spatial-layout-maintained Object Semantics Features (SOSF) method, and 81.15% on the SUN397 dataset, 1.49 percentage points higher than that of the Hierarchy of Alternating Specialists (HoAS) method. Experimental results show that the proposed method improves the accuracy of indoor scene recognition.
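A minimal sketch of the two components named above, assuming PyTorch; all dimensions and variable names are illustrative examples, not the authors' implementation:

import torch
import torch.nn as nn

class ClassConversionMatrix(nn.Module):
    """Linear map that converts object features to the scene-feature dimension."""
    def __init__(self, num_object_classes: int, scene_dim: int):
        super().__init__()
        self.ccm = nn.Linear(num_object_classes, scene_dim, bias=False)

    def forward(self, object_scores):           # (batch, num_object_classes)
        return self.ccm(object_scores)          # (batch, scene_dim)

class ContextGating(nn.Module):
    """y = x * sigmoid(Wx + b): down-weights redundant feature dimensions."""
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(dim, dim)

    def forward(self, x):
        return x * torch.sigmoid(self.gate(x))

# Usage: fuse converted object features with scene features before classifying.
obj = torch.randn(8, 365)                       # e.g. 365 detected-object scores
scene = torch.randn(8, 512)                     # scene-network features
fused = ContextGating(512)(ClassConversionMatrix(365, 512)(obj) + scene)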
Dynamic mapping method for heterogeneous multi-core system under thermal safety constraint
AN Xin, YANG Haijiao, LI Jianhua, REN Fuji
Journal of Computer Applications    2021, 41 (9): 2631-2638.   DOI: 10.11772/j.issn.1001-9081.2020111870
Heterogeneous multi-core platforms provide flexibility for system design by integrating different types of processing cores, allowing applications to dynamically select core types according to their requirements and thus run efficiently. With the development of semiconductor technology, the number of cores integrated on a single chip has increased, giving modern multi-core processors a higher power density; this raises the chip temperature and ultimately degrades system performance. To fully exploit the performance advantages of heterogeneous multi-core systems, a dynamic mapping method was proposed to maximize system performance under a Thermal Safe Power (TSP) constraint. The method considers two heterogeneity factors of such systems when determining the mapping scheme: core type and thermal susceptibility. First, different types of processing cores have different characteristics and therefore suit different applications. Second, processing cores at different positions on the chip have different thermal susceptibility: cores closer to the center receive more heat transferred from other cores and thus run hotter. Accordingly, a neural network performance predictor was built to match threads to processing core types, and the TSP model was used to map the matched threads to specific locations on the chip. Experimental results show that, while satisfying the thermal safety constraint, the proposed method increases the average number of Instructions Per Cycle (IPC) by about 53% compared with the common Round Robin Scheduler (RRS).
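A toy sketch of the mapping loop described above: a stub performance predictor matches threads to core types, and a per-core power budget standing in for the TSP model decides which physical core receives the thread. The numbers and the predictor are invented for illustration, not taken from the paper:

from dataclasses import dataclass

@dataclass
class Core:
    cid: int
    ctype: str         # "big" or "little"
    tsp_budget: float  # thermally safe power for this chip position (W)
    used: bool = False

def predict_ipc(thread_features, ctype):
    # Stand-in for the paper's neural-network performance predictor.
    return thread_features["ilp"] * (1.6 if ctype == "big" else 1.0)

def map_thread(thread_features, power_need, cores):
    # Prefer the core type with the highest predicted IPC, but only place the
    # thread on a free core whose TSP budget covers its power demand.
    candidates = [c for c in cores if not c.used and c.tsp_budget >= power_need]
    if not candidates:
        return None
    best = max(candidates, key=lambda c: predict_ipc(thread_features, c.ctype))
    best.used = True
    return best.cid

cores = [Core(0, "big", 3.5), Core(1, "big", 2.8),    # center cores: lower budget
         Core(2, "little", 4.2), Core(3, "little", 4.0)]
print(map_thread({"ilp": 2.0}, power_need=3.0, cores=cores))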
Robotic grasping system based on improved single shot multibox detector algorithm
HAN Xin, YU Yongwei, DU Liuqing
Journal of Computer Applications    2020, 40 (8): 2434-2440.   DOI: 10.11772/j.issn.1001-9081.2019122234
In automobile part recycling factories, poor part detection under complex real-world working conditions prevents accurate grasping and thus lowers production efficiency. To address this, a robotic grasping system based on an improved Single Shot multibox Detector (SSD) algorithm was proposed to perform part detection, classification, location and grasping. First, target parts were detected by the improved SSD model, which provided part location and class information. Second, through Kinect camera calibration and hand-eye calibration, pixel coordinates were transformed into the robot world coordinate system, locating the parts in the robot's spatial coordinate system. Third, the grasping task was completed by forward and inverse kinematic modeling of the robot and trajectory planning. Finally, validation experiments of the whole integrated system on part detection, classification, location and grasping were carried out. Experimental results show that under complex working conditions the average part grasping success rate of the proposed system reaches 95%, meeting actual production demands.
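A minimal sketch of the pixel-to-robot-coordinate step described above, assuming NumPy. K, R and t stand for the calibrated camera intrinsics and hand-eye extrinsics; the values below are placeholders, not real calibration results:

import numpy as np

def pixel_to_world(u, v, depth, K, R, t):
    """Back-project pixel (u, v) at a given depth, then apply extrinsics."""
    p_cam = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))  # camera frame
    return R @ p_cam + t                                        # robot frame

K = np.array([[525.0, 0, 319.5], [0, 525.0, 239.5], [0, 0, 1]])  # Kinect-like
R, t = np.eye(3), np.array([0.2, 0.0, 0.5])                      # hand-eye calib
print(pixel_to_world(400, 260, depth=0.8, K=K, R=R, t=t))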
Discrete controller synthesis based resource management method of heterogeneous multi-core processor system
AN Xin, XIA Jinwei, YANG Haijiao, OUYANG Yiming, REN Fuji
Journal of Computer Applications    2020, 40 (6): 1698-1706.   DOI: 10.11772/j.issn.1001-9081.2019101865
With the development of semiconductor technology and the diversification of application requirements, heterogeneous multi-core processors have been widely used in high-performance embedded systems. A main challenge for such systems is how to manage and distribute the available resources (such as processing cores) at run time so as to meet the performance and power requirements of the system and the applications it runs. Although some mainstream resource management techniques achieve good results in performance and/or power optimization, they lack strict reliability guarantees for the resource management component itself. Therefore, a method based on Discrete Controller Synthesis (DCS) was proposed to design online resource management schemes for heterogeneous multi-core systems automatically and reliably; DCS is a formal technique that can construct management control components automatically. In this method, the running behaviors of the heterogeneous system (such as how processing cores are distributed to applications) were described with formal models, and the online resource management problem was transformed into a DCS problem aiming at a system management objective (such as maximizing system performance). On this basis, existing DCS tools were used to illustrate and validate the proposed method, and the scalability of the DCS approach was evaluated.
Blockchain enhanced lightweight node model
ZHAO Yulong, NIU Baoning, LI Peng, FAN Xing
Journal of Computer Applications    2020, 40 (4): 942-946.   DOI: 10.11772/j.issn.1001-9081.2019111917
The inherent chain structure of blockchain means that its data volume grows linearly and without bound. Over time, this puts heavy storage pressure on single nodes and greatly wastes storage space across the whole system. The Simplified Payment Verification (SPV) node model proposed in the Bitcoin white paper greatly reduces a node's need for storage space; however, it reduces the number of full nodes and increases their load, which weakens the decentralization of the entire system and introduces security risks such as denial-of-service attacks and Sybil attacks. By analyzing Bitcoin block data, a fully functional enhanced lightweight node model, Enhanced SPV (ESPV), was proposed. ESPV divides blocks into new and old blocks and adopts different storage management strategies for them. New blocks are stored in full (one copy per node) for transaction verification, allowing ESPV nodes to verify transactions (mine) at a small storage cost. Old blocks are stored across the network in slices and accessed through a hierarchical block partition routing table, reducing the storage waste of the system while ensuring data availability and reliability. ESPV nodes retain full node functionality, thus preserving the decentralization of the blockchain system and enhancing its security and stability. The experimental results show that ESPV nodes achieve a transaction verification rate of more than 80%, while their data volume and growth are only 10% of those of full nodes. The data availability and reliability of ESPV are guaranteed, and it is applicable throughout the life cycle of the system.
Heterogeneity-aware multi-core scheduling method based on machine learning
AN Xin, KANG An, XIA Jinwei, LI Jianhua, CHEN Tian, REN Fuji
Journal of Computer Applications    2020, 40 (10): 3081-3087.   DOI: 10.11772/j.issn.1001-9081.2020010118
Heterogeneous multi-core processors are the mainstream solution for modern embedded systems, and good online mapping or scheduling approaches play an important role in exploiting their advantages of high performance and low power consumption. To deal with the dynamic mapping and scheduling of applications on heterogeneous multi-core systems, a solution was proposed that effectively determines remapping times so as to maximize system performance, using machine-learning-based techniques that quickly and accurately evaluate program performance and detect phase changes in program behavior. In this solution, static and dynamic features of the processing cores and of the running programs were carefully selected to capture the differences in computing capability and workload behavior introduced by heterogeneity, yielding a more accurate prediction model; at the same time, phase detection was introduced to reduce the number of online mapping computations as much as possible, providing a more efficient scheduling scheme. Finally, the effectiveness of the proposed scheduling scheme was verified on the SPLASH-2 benchmarks. Experimental results show that, compared with the Completely Fair Scheduler (CFS) of Linux, the proposed method achieves about 52% gain in computing performance and a 9.4% improvement in CPU resource utilization, demonstrating that it can effectively improve the dynamic mapping and scheduling of applications on heterogeneous multi-core systems.
Performance analysis of wireless key generation with multi-bit quantization under imperfect channel estimation condition
DING Ning, GUAN Xinrong, YANG Weiwei, LI Tongkai, WANG Jianshe
Journal of Computer Applications    2020, 40 (1): 143-147.   DOI: 10.11772/j.issn.1001-9081.2019061004
Channel estimation error seriously degrades the key agreement between the two communicating parties in wireless key generation, so a multi-bit quantization wireless key generation scheme under imperfect channel estimation was proposed. First, a channel estimation error model was established to investigate the influence of imperfect channel estimation on wireless key generation. Then, a multi-bit key quantizer with guard bands was designed, and the performance of the generated keys could be improved by optimizing the quantization parameters. Closed-form expressions for the Key Disagreement Rate (KDR) and the Effective Key Generation Rate (EKGR) were derived, revealing the relationships between pilot signal power, quantization order, guard bands and these two performance indicators. Simulation results show that increasing the transmit pilot power effectively reduces the KDR; increasing the quantization order improves the key generation rate but also increases the KDR; and increasing the quantization order while choosing an appropriate guard band size can effectively reduce the KDR.
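A sketch of an m-bit quantizer with guard bands for channel-amplitude samples, assuming NumPy. The equiprobable thresholds and the guard ratio are illustrative choices, not the paper's exact parameters:

import numpy as np

def quantize_with_guard(h, m=2, guard_ratio=0.1):
    """Map samples to Gray-coded m-bit symbols; samples in guard bands -> None."""
    edges = np.quantile(h, np.linspace(0, 1, 2**m + 1))  # equiprobable bins
    keys = []
    for x in h:
        k = min(np.searchsorted(edges, x) - 1, 2**m - 1)
        k = max(k, 0)
        lo, hi = edges[k], edges[k + 1]
        guard = guard_ratio * (hi - lo) / 2
        # Drop samples too close to a bin edge: these are the error-prone ones.
        if (x - lo < guard and k > 0) or (hi - x < guard and k < 2**m - 1):
            keys.append(None)
        else:
            keys.append(format(k ^ (k >> 1), f"0{m}b"))  # Gray code
    return keys

rng = np.random.default_rng(1)
h = rng.rayleigh(1.0, 200)           # Rayleigh-fading-like amplitude estimates
bits = [b for b in quantize_with_guard(h) if b is not None]
print(len(bits), bits[:5])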
Sampling awareness weighted round robin scheduling algorithm in power grid
TAN Xin, LI Xiaohui, LIU Zhenxing, DING Yuemin, ZHAO Min, WANG Qi
Journal of Computer Applications    2019, 39 (7): 2061-2064.   DOI: 10.11772/j.issn.1001-9081.2018112339

When smart grid phasor measurement equipment competes for limited network communication resources, data packets are delayed or lost due to uneven resource allocation, which affects the accuracy of power system state estimation. To solve this problem, a Sampling Awareness Weighted Round Robin (SAWRR) scheduling algorithm was proposed. First, according to the characteristics of Phasor Measurement Unit (PMU) sampling frequency and packet size, a weight definition method based on the mean square deviation of PMU traffic flow was proposed. Second, a corresponding iterative loop scheduling algorithm was designed for PMU sampling awareness. Finally, the algorithm was applied to a PMU sampling transmission model. The proposed algorithm adaptively senses sampling changes of the PMUs and adjusts packet transmission in time. Simulation results show that, compared with the original weighted round robin scheduling algorithm, SAWRR reduces the scheduling delay of PMU sampling packets by 95%, halves the packet loss rate and increases the throughput twofold. Applying SAWRR to PMU data transmission helps ensure the stability of the smart grid.
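A toy sketch of the weighted-round-robin idea described above: each PMU's weight is derived from the variability (here, the mean square deviation) of its traffic, and queues are served in proportion to those weights. The traffic model and the exact weight formula are illustrative, not the paper's:

import numpy as np
from collections import deque

def weights_from_traffic(flows):
    """flows: list of recent packet-size arrays, one per PMU."""
    msd = np.array([np.mean((f - f.mean())**2) for f in flows])
    w = msd + msd.mean() * 0.1           # keep every weight strictly positive
    return np.maximum(1, np.round(len(flows) * w / w.sum()).astype(int))

def weighted_round_robin(queues, weights, rounds=3):
    served = []
    for _ in range(rounds):
        for q, w in zip(queues, weights):
            for _ in range(w):           # w packets per round for this PMU
                if q:
                    served.append(q.popleft())
    return served

rng = np.random.default_rng(0)
flows = [rng.normal(100, s, 50) for s in (1, 5, 20)]  # 3 PMUs, rising burstiness
queues = [deque(f"pmu{i}-pkt{j}" for j in range(6)) for i in range(3)]
w = weights_from_traffic(flows)
print(w, weighted_round_robin(queues, w)[:8])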

Test data compatible compression method based on tri-state signal
CHEN Tian, ZUO Yongsheng, AN Xin, REN Fuji
Journal of Computer Applications    2019, 39 (6): 1863-1868.   DOI: 10.11772/j.issn.1001-9081.2018112334
To address the growing volume of test data in Very Large Scale Integration (VLSI) development, a test data compression method based on tri-state signals was proposed. First, the test set was optimized and pre-processed by partial input reduction and test vector reordering, improving the compatibility among test patterns while increasing the proportion of don't-care bits (X) in the test set. Then, tri-state-signal coding compression was applied to the pre-processed test set: exploiting the characteristics of tri-state signals, the test set was divided into multiple scan slices, and compatible coding compression was performed on the scan slices. With various test rules considered, the compression ratio of the test set was improved. Experimental results show that, compared with similar compression methods, the proposed method achieves a higher compression ratio, with an average test compression ratio of 76.17%, without significant increases in test power or area overhead.
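A small sketch of the compatibility notion this method relies on: two scan slices are compatible when, position by position, their bits agree or at least one is the don't-care value 'X', in which case they can share one code word. The greedy grouping below is an illustrative stand-in for the paper's coding procedure:

def compatible(a: str, b: str) -> bool:
    return all(x == y or 'X' in (x, y) for x, y in zip(a, b))

def merge(a: str, b: str) -> str:
    """Overlay two compatible slices, keeping the specified (non-X) bits."""
    return ''.join(y if x == 'X' else x for x, y in zip(a, b))

slices = ["1X0X", "110X", "X10X", "0XX1"]
groups = []                      # greedy grouping of mutually compatible slices
for s in slices:
    for g in groups:
        if compatible(s, g[0]):
            g[0] = merge(g[0], s)
            g[1].append(s)
            break
    else:
        groups.append([s, [s]])
print([(rep, members) for rep, members in groups])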
Quantitative analysis of physical secret key in OFDM channel based on universal software radio peripheral
DING Ning, GUAN Xinrong, YANG Weiwei
Journal of Computer Applications    2019, 39 (6): 1780-1785.   DOI: 10.11772/j.issn.1001-9081.2018102120
To compare and analyze the performance of the single-threshold and double-threshold quantization algorithms on measured data, and to improve the performance of physical-layer secret keys by optimizing the quantization parameters, an Orthogonal Frequency Division Multiplexing (OFDM) system was built with Universal Software Radio Peripherals (USRP). The channel amplitude feature was extracted as the key source through channel estimation, and the two quantization algorithms were analyzed in terms of key consistency, randomness and residual key length. Simulation results for these three metrics under single-threshold and double-threshold quantization were obtained from the measured data. The results show that the single-threshold quantization algorithm has an optimal quantization threshold that minimizes the key disagreement rate under a given key randomness constraint, that the double-threshold quantization algorithm has an optimal quantization factor that maximizes the effective key length, and that when the Cascade key negotiation algorithm is used for reconciliation there is a trade-off between key consistency and key generation rate across quantization algorithms.
Machine learning based online mapping approach for heterogeneous multi-core processor system
AN Xin, ZHANG Ying, KANG An, CHEN Tian, LI Jianhua
Journal of Computer Applications    2019, 39 (6): 1753-1759.   DOI: 10.11772/j.issn.1001-9081.2018112311
Heterogeneous Multi-core Processors (HMPs) have become the mainstream solution for modern embedded system design, and online mapping or scheduling plays a vital role in fully exploiting their advantages of high performance and low power consumption. Aiming at the dynamic mapping problem of application tasks on HMPs, a mapping and scheduling approach based on a machine learning prediction model was proposed. On the one hand, a machine learning model was constructed to predict and evaluate the performance of different mapping strategies rapidly and efficiently, supporting online scheduling; on the other hand, the machine learning model was integrated with a genetic algorithm to find the optimal resource allocation strategy efficiently. Finally, a Motion Joint Photographic Experts Group (M-JPEG) decoder was used to verify the effectiveness of the proposed approach. Experimental results show that, compared with the Round Robin Scheduler (RRS) and a sampling-based scheduling approach, the proposed online mapping/scheduling approach reduces the average execution time by about 19% and 28% respectively.
Hybrid defect prediction model based on network representation learning
LIU Chengbin, ZHENG Wei, FAN Xin, YANG Fengyu
Journal of Computer Applications    2019, 39 (12): 3633-3638.   DOI: 10.11772/j.issn.1001-9081.2019061028
To account for the dependences between software system modules, a hybrid defect prediction model based on network representation learning was constructed by analyzing the network structure of the software system. First, the software system was converted into a software network module by module. Then, network representation learning was used to learn, without supervision, the system structural features of each module in the software network. Finally, these structural features were combined with the semantic features learned by a convolutional neural network to build the hybrid defect prediction model. Experimental results on three Apache open-source projects, poi, lucene and synapse, show that the hybrid model achieves better defect prediction, with an F1 score 3.8%, 1.0% and 4.1% higher respectively than that of the best model based on a Convolutional Neural Network (CNN). Analysis of software network structural features thus provides an effective line of research for building defect prediction models.
Parallel test scheduling optimization method for three-dimensional chip with multi-core and multi-layer
CHEN Tian, WANG Jiawei, AN Xin, REN Fuji
Journal of Computer Applications    2018, 38 (6): 1795-1800.   DOI: 10.11772/j.issn.1001-9081.2017123002
To reduce the high cost of chip testing in Three-Dimensional (3D) chip manufacturing, a new scheduling method based on Time Division Multiplexing (TDM) was proposed to cooperatively optimize testing resources across layers and across cores within a layer. First, shift registers were placed on each layer of the 3D chip, and the testing frequency was divided appropriately between layers and between cores of the same layer under the control of the shift register group on the input data, so that cores at different locations could be tested in parallel. Second, a greedy algorithm was used to optimize the allocation of registers, reducing the idle test cycles of core-parallel testing. Finally, the Discrete Binary Particle Swarm Optimization (DBPSO) algorithm was used to find the best 3D stack layout, so that the transmission potential of the Through Silicon Vias (TSVs) could be fully used to improve parallel testing efficiency and reduce testing time. Experimental results show that, under power constraints, the utilization of the optimized Test Access Mechanism (TAM) increases by 16.28% on average, and the testing time of the optimized 3D stack decreases by 13.98% on average. The proposed method can shorten testing time and reduce testing cost.
Enhanced algorithm of image super-resolution based on dual-channel convolutional neural networks
JIA Kai, DUAN Xintao, LI Baoxia, GUO Daidou
Journal of Computer Applications    2018, 38 (12): 3563-3569.   DOI: 10.11772/j.issn.1001-9081.2018040820
Single-channel image super-resolution methods cannot achieve both fast convergence and high-quality texture detail restoration. To solve this problem, a new Enhanced algorithm of image Super-Resolution based on a Dual-channel Convolutional neural network (EDCSR) was proposed. First, the network was divided into a deep channel and a shallow channel: the deep channel extracted detailed texture information, while the shallow channel mainly restored the overall image contour. Then, the deep channel used residual learning to deepen the network while reducing model parameters, eliminating the degradation problem caused by overly deep networks; long- and short-term memory blocks were constructed to remove the artifacts and noise introduced by the deconvolution layer, and texture information was extracted at different scales by a multi-scale method, while the shallow channel only needed to restore the main contour of the image. Finally, the losses of the two channels were integrated to optimize the network continuously, guiding it to generate high-resolution images. Experimental results show that, compared with the End-to-End image super-resolution algorithm via Deep and Shallow convolutional networks (EEDS), the proposed algorithm converges faster and reconstructs edges and textures significantly better, improving the Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM) on average by 0.15 dB and 0.0031 on the Set5 dataset, and by 0.18 dB and 0.0035 on the Set14 dataset.
Plant image recognition based on family priority strategy
CAO Xiangying, SUN Weimin, ZHU Youxiang, QIAN Xin, LI Xiaoyu, YE Ning
Journal of Computer Applications    2018, 38 (11): 3241-3245.   DOI: 10.11772/j.issn.1001-9081.2018041309
Plant recognition includes two kinds of task: specimen recognition and recognition in real environments. Because of background noise, plant image recognition in real environments is the harder task. To reduce the weight of Convolutional Neural Networks (CNN), alleviate over-fitting, and improve recognition rate and generalization ability, a plant identification method with Family Priority (FP) was proposed. Combined with the lightweight CNN MobileNet model, a plant recognition model, Family Priority MobileNet (FP-MobileNet), was established by transfer learning. On flavia, a plant dataset with plain backgrounds, the MobileNet model achieved 99.8% accuracy. On the more challenging real-environment flower dataset flower102, FP-MobileNet achieved 99.56% accuracy when the training set was larger than the test set, and still achieved 95.56% accuracy when the training set was smaller than the test set. Experimental results show that the accuracy of FP-MobileNet is higher than that of the pure MobileNet model under both dataset partitioning schemes. Moreover, the FP-MobileNet weights occupy only 13.7 MB while maintaining a high recognition rate; the model balances accuracy and latency, and is suitable for deployment on mobile devices that require a lightweight model.
Clustering algorithm of time series with optimal u-shapelets
YU Siqin, YAN Qiuyan, YAN Xinming
Journal of Computer Applications    2017, 37 (8): 2349-2356.   DOI: 10.11772/j.issn.1001-9081.2017.08.2349
Focusing on the low quality of u-shapelets (unsupervised shapelets) in u-shapelet-based time series clustering, a time series clustering method based on optimal u-shapelets, named DivUshapCluster, was proposed. First, the influence of different subsequence quality assessment methods on u-shapelet-based time series clustering results was discussed. Second, the best subsequence quality assessment method was used to evaluate the quality of the u-shapelet candidates. Then, diversified top-k query technology was used to remove redundant u-shapelets from the candidates and select the optimal u-shapelets. Finally, the optimal u-shapelet set was used to transform the original dataset, improving the accuracy of time series clustering. Experimental results show that DivUshapCluster is superior to traditional time series clustering methods in clustering accuracy: compared with the BruteForce and SUSh methods, its average clustering accuracy on 22 datasets is higher by 18.80% and 19.38% respectively. The proposed method effectively improves the clustering accuracy of time series while maintaining overall efficiency.
Shapelet classification method based on trend feature representation
YAN Xinming, MENG Fanrong, YAN Qiuyan
Journal of Computer Applications    2017, 37 (8): 2343-2348.   DOI: 10.11772/j.issn.1001-9081.2017.08.2343
A shapelet is a discriminative time series subsequence; by identifying local characteristics, shapelets enable accurate classification of time series. The original shapelet discovery algorithm is inefficient, and much work has focused on improving discovery efficiency. However, for time series with trend changes, using a typical time series representation for shapelet discovery tends to lose the trend information in the sequence. To solve this problem, a new trend-based diversified top-k shapelet classification method was proposed. First, trend feature symbolization was used to represent the trend information of a time series. Then, the shapelet candidate set was obtained from the trend signature of the sequence. Finally, the k most representative shapelets were selected from the candidate set by a diversifying top-k query algorithm. Time series classification experiments show that, compared with traditional classification algorithms, the proposed method improves accuracy on 11 experimental datasets; compared with the FastShapelet algorithm it is more efficient, with shorter running time, especially on data with obvious trend information. The results indicate that the proposed method can effectively improve both the accuracy and the efficiency of time series classification.
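A minimal sketch of trend-feature symbolization as described above: each consecutive pair of points is mapped to a symbol for up, down or flat, so shapelet candidates are drawn from the trend string rather than from raw values. The flatness tolerance eps is an illustrative parameter, not the paper's:

def trend_symbols(series, eps=0.05):
    out = []
    for prev, cur in zip(series, series[1:]):
        d = cur - prev
        out.append('U' if d > eps else 'D' if d < -eps else 'F')
    return ''.join(out)

print(trend_symbols([1.0, 1.2, 1.21, 0.9, 0.88, 1.3]))  # -> 'UFDFU'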
Path planning for restaurant service robot based on improved genetic algorithm
XU Lin, FAN Xinwei
Journal of Computer Applications    2017, 37 (7): 1967-1971.   DOI: 10.11772/j.issn.1001-9081.2017.07.1967
Since the Genetic Algorithm (GA) is prone to premature convergence and converges slowly, an improved GA called HLGA (Halton-Levenshtein Genetic Algorithm), built on the Traditional GA (TGA), was proposed for path planning of real restaurant service robots. First, a similarity measure based on edit distance was used to optimize an initial population generated from a quasi-random sequence; second, improved adaptive adjustment formulas for crossover and mutation probability were adopted to cross and mutate the selected individuals; finally, the fitness values of individuals under the safety evaluation factor functions were computed, and the global optimal solution was obtained by comparison and iteration. In theoretical analysis and Matlab simulation, the running time of HLGA was 6.92 seconds and 1.79 seconds shorter than that of TGA and of the Adaptive Genetic Algorithm based on Improved independent Similarity (ISAGA) respectively, and the planned path was safer and smoother. Simulation results show that HLGA effectively improves the quality of path planning in practical applications while reducing the search space and the planning time.
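A sketch of how a Halton quasi-random sequence (the "H" in HLGA) can seed a GA population so that initial waypoints cover the map more evenly than pseudo-random draws; the grid size and population shape are illustrative, not the paper's setup:

def halton(index: int, base: int) -> float:
    """index-th element (1-based) of the van der Corput sequence in `base`."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def init_population(pop_size, n_waypoints, width, height):
    pop, k = [], 1
    for _ in range(pop_size):
        path = []
        for _ in range(n_waypoints):
            path.append((halton(k, 2) * width, halton(k, 3) * height))
            k += 1
        pop.append(path)
    return pop

print(init_population(2, 3, width=10, height=10))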
Dynamic resource allocation strategy in Spark Streaming
LIU Bei, TAN Xinming, CAO Wenbin
Journal of Computer Applications    2017, 37 (6): 1574-1579.   DOI: 10.11772/j.issn.1001-9081.2017.06.1574
When Spark Streaming is used as the stream processing component of a hybrid large-scale computing platform, existing resource allocation strategies have long resource adjustment cycles and cannot adequately meet the individual needs of different applications and users. To solve these problems, a Dynamic Resource Allocation strategy for Multi-application (DRAM) was proposed, in which global variables were added to control the dynamic resource allocation process. First, historical data feedback and the global variables were obtained; then, whether to increase or decrease the resources of each application was determined; finally, the resource increase or decrease was carried out. Experimental results show that, for both stable and unstable data streams, the proposed strategy adjusts resource quotas effectively and reduces processing delay compared with the original Spark strategies such as Streaming and Core, while also improving cluster resource utilization.
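A toy sketch of the kind of feedback rule such a strategy can use: compare each application's recent processing delay against the batch interval and grow or shrink its executor quota within a global limit. The thresholds and field names are invented for illustration; this is not Spark's API, only the decision logic:

def adjust_quota(app, batch_interval_ms, total_free):
    """app: dict with recent delays (ms) and the current executor quota."""
    avg_delay = sum(app["recent_delays_ms"]) / len(app["recent_delays_ms"])
    if avg_delay > 0.9 * batch_interval_ms and total_free > 0:
        return app["executors"] + 1, total_free - 1   # falling behind: grow
    if avg_delay < 0.5 * batch_interval_ms and app["executors"] > 1:
        return app["executors"] - 1, total_free + 1   # over-provisioned: shrink
    return app["executors"], total_free

apps = [{"name": "a", "executors": 4, "recent_delays_ms": [950, 980, 990]},
        {"name": "b", "executors": 4, "recent_delays_ms": [120, 150, 100]}]
free = 2
for app in apps:
    app["executors"], free = adjust_quota(app, batch_interval_ms=1000, total_free=free)
print([(a["name"], a["executors"]) for a in apps], "free:", free)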
Diversified top-k shapelets transform for time series classification
SUN Qifa, YAN Qiuyan, YAN Xinming
Journal of Computer Applications    2017, 37 (2): 335-340.   DOI: 10.11772/j.issn.1001-9081.2017.02.0335

Focusing on the issue that shapelet candidates can be very similar in shapelet-transform-based time series classification, a diversified top-k shapelets transform method named DivTopKShapelet was proposed. In DivTopKShapelet, the diversified top-k query method was used to filter similar shapelets and select the k most representative ones; the optimal shapelets were then used to transform the data, improving both the accuracy and the time efficiency of typical time series classification methods. Experimental results show that, compared with the clustering-based shapelet classification method (ClusterShapelet) and the coverage-based shapelet classification method (ShapeletSelection), DivTopKShapelet not only improves on traditional time series classification methods but also increases accuracy by up to 48.43% and 32.61% respectively; at the same time, it improves computational efficiency on 15 datasets, with speed-ups of at least 1.09 times and at most 287.8 times.
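A sketch of the diversified top-k selection step: walk candidates in order of quality and keep one only if it is not too similar to any already-kept shapelet. The quality scores and the distance function below are simplified stand-ins for the information-gain-based measures used in shapelet methods:

import numpy as np

def zdist(a, b):
    """Length-normalized Euclidean distance between equal-length subsequences."""
    return np.linalg.norm(np.asarray(a) - np.asarray(b)) / np.sqrt(len(a))

def diversified_top_k(candidates, quality, k, sim_threshold=0.5):
    order = np.argsort(quality)[::-1]            # best quality first
    chosen = []
    for i in order:
        if all(zdist(candidates[i], candidates[j]) > sim_threshold for j in chosen):
            chosen.append(i)
        if len(chosen) == k:
            break
    return [candidates[i] for i in chosen]

cands = [[0, 1, 2, 3], [0.1, 1, 2, 3.1], [3, 2, 1, 0], [1, 1, 1, 1]]
print(diversified_top_k(cands, quality=[0.9, 0.85, 0.7, 0.4], k=2))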

Forest fire image segmentation algorithm with adaptive threshold based on smooth spline function
YANG Xubing, TAN Xinyi, ZHANG Fuquan
Journal of Computer Applications    2017, 37 (11): 3157-3161.   DOI: 10.11772/j.issn.1001-9081.2017.11.3157
Based on the smoothing spline principle, a self-adaptive multi-threshold segmentation algorithm, HistSplineReg (Spline Regression for Histogram), was proposed. HistSplineReg is a two-step method: first, a smoothing spline function is regressed to fit the one-dimensional image histogram; then, the extreme values of the regression function are located to segment the image automatically with multiple thresholds. Compared with existing multi-threshold methods, HistSplineReg has five advantages: 1) it is consistent with human intuition; 2) it is built on smoothing splines, a solid mathematical foundation; 3) both the number and the values of the thresholds are determined automatically; 4) it has an analytical solution, whose computational burden is concentrated in a Cholesky decomposition of a matrix whose size depends on the gray-level range of the image rather than on the image size; 5) it has only one trade-off parameter, balancing the empirical error against the regressor's smoothness. Furthermore, an experimental reference value of this parameter was provided for the forest fire recognition task. Finally, experiments were conducted on digital forest fire images in the RGB (Red, Green, Blue) mode. The results show that HistSplineReg is more effective than Support Vector Regression (SVR) and Polynomial Fitting (PolyFit), whether segmentation is based on the grayscale image, on individual color channels, or on the color image synthesized from per-channel segmentations; all three methods indicate that the red channel carries the most significant information for forest fire image segmentation.
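A sketch of the two-step idea, assuming SciPy: fit a smoothing spline to the gray-level histogram, then take the local minima of the fitted curve as segmentation thresholds. The smoothing factor s plays the role of the single trade-off parameter mentioned above; its value here is illustrative:

import numpy as np
from scipy.interpolate import UnivariateSpline

def spline_thresholds(gray_image, s=2.0):
    hist, _ = np.histogram(gray_image, bins=256, range=(0, 256))
    x = np.arange(256)
    spl = UnivariateSpline(x, hist.astype(float), s=s * hist.sum())
    y = spl(x)
    # Interior local minima of the regressed histogram -> candidate thresholds.
    return [i for i in range(1, 255) if y[i] < y[i - 1] and y[i] <= y[i + 1]]

rng = np.random.default_rng(0)
img = np.clip(np.concatenate([rng.normal(60, 10, 4000),
                              rng.normal(180, 15, 4000)]), 0, 255)
print(spline_thresholds(img))  # one candidate should lie between the two modes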
Efficient public auditing scheme for cloud storage supporting user revocability based on proxy re-signature
ZHANG Xinpeng, XU Chunxiang, ZHANG Xinyan, SAI Wei, HAN Xingyang, LIU Guoping
Journal of Computer Applications    2016, 36 (7): 1816-1821.   DOI: 10.11772/j.issn.1001-9081.2016.07.1816
Owing to user revocability, a new data manager needs to verify the integrity of the data that the former data manager managed and stored in the cloud server, a situation that is clearly unavoidable in practice. To address this, an efficient privacy-preserving public auditing scheme for cloud storage was proposed. First, in the proposed scheme, which is based on unidirectional proxy re-signature, the proxy re-signature key is generated from the current data manager's private key and the former manager's public key without leaking any information, so the transfer of data ownership caused by user revocation is realized securely. Second, it was proved that no malicious cloud server can forge a response proof that passes verification to cheat the Third Party Auditor (TPA). Moreover, the random masking technique was employed to prevent a curious TPA from recovering the primitive data blocks. Compared with the Panda scheme, the proposed scheme adds new functionality, yet its communication overhead during auditing and its computational cost are both lower.
Optimization between multiple input multiple output radar signal and target interference based on Stackelberg game
LAN Xing, WANG Xingliang, LI Wei, WU Haotian, JIANG Mengran
Journal of Computer Applications    2015, 35 (4): 1185-1189.   DOI: 10.11772/j.issn.1001-9081.2015.04.1185

To address the game of detection and stealth between Multiple Input Multiple Output (MIMO) radar and a target in the presence of clutter, a new two-step water-filling method was proposed. First, a space-time coding model was built. Then, based on mutual information, water-filling was applied to allocate target interference power and generalized water-filling to allocate radar signal power. Finally, optimization schemes for the target-dominant and radar-dominant Stackelberg games were obtained under strong and weak clutter. Simulation results indicate that both the radar signal power allocation and the trend of the generalized water-filling level are affected by clutter: in the strong-clutter environment, the mutual information of the two optimization schemes is roughly halved and the interference factor decreases by 0.2 and 0.25 respectively, so the mutual information is less sensitive to interference. These results demonstrate the effectiveness of the proposed algorithm.
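A sketch of classic water-filling, the building block of the two-step scheme above, assuming NumPy: power is poured over sub-channels up to a common water level mu, found here by bisection so the total budget is met. The channel gains are illustrative:

import numpy as np

def water_filling(inv_gains, total_power, iters=60):
    """inv_gains: noise-to-gain ratios per sub-channel; returns power vector."""
    lo, hi = 0.0, max(inv_gains) + total_power
    for _ in range(iters):                        # bisect on the water level mu
        mu = (lo + hi) / 2
        p = np.maximum(mu - inv_gains, 0.0)
        lo, hi = (mu, hi) if p.sum() < total_power else (lo, mu)
    return np.maximum((lo + hi) / 2 - inv_gains, 0.0)

inv_gains = np.array([0.5, 1.0, 2.0, 4.0])        # weak channels get less power
p = water_filling(inv_gains, total_power=4.0)
print(p.round(3), p.sum().round(3))               # ~[2.0, 1.5, 0.5, 0.0], 4.0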

Trajectory segment-based abnormal behavior detection method using LDA model
ZHENG Bingbin, FAN Xinnan, LI Min, ZHANG Ji
Journal of Computer Applications    2015, 35 (2): 515-518.   DOI: 10.11772/j.issn.1001-9081.2015.02.0515

Most current trajectory-based abnormal behavior detection algorithms ignore the internal information of trajectories, which can lead to a high false alarm rate. An abnormal behavior detection method based on trajectory segments and a topic model was therefore presented. First, the original trajectories were partitioned into segments according to turning angles. Second, behavior characteristic information was extracted by quantizing the observations from these segments into visual words. Then, the spatio-temporal relationships among trajectories were explored with the Latent Dirichlet Allocation (LDA) model. Finally, behavior pattern analysis and abnormal behavior detection were implemented by learning the corresponding generative topic model combined with Bayesian theory. Simulation experiments on behavior pattern analysis and abnormal behavior detection were conducted on two video scenes, and various kinds of abnormal behavior patterns were detected. Experimental results show that, combined with trajectory segmentation, the proposed method can mine the internal behavior characteristics of trajectories to identify a variety of abnormal behavior patterns and improve the accuracy of abnormal behavior detection.
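A sketch of the pipeline described above, assuming scikit-learn: trajectory segments are quantized into visual words (here, coarse direction/length codes), each trajectory becomes a word document, and LDA uncovers behavior topics. The quantization scheme is an illustrative stand-in for the paper's:

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def segment_to_word(dx, dy):
    direction = int(((np.degrees(np.arctan2(dy, dx)) + 360) % 360) // 45)
    length = 'L' if np.hypot(dx, dy) > 5 else 'S'
    return f"d{direction}{length}"

def trajectory_to_doc(points):
    return ' '.join(segment_to_word(x2 - x1, y2 - y1)
                    for (x1, y1), (x2, y2) in zip(points, points[1:]))

docs = [trajectory_to_doc([(0, 0), (6, 0), (12, 1)]),   # moving right
        trajectory_to_doc([(0, 0), (0, 6), (1, 12)]),   # moving up
        trajectory_to_doc([(0, 0), (6, 1), (12, 0)])]   # moving right
X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
print(lda.transform(X).round(2))  # low-likelihood docs would flag abnormal patterns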

Blind image forensics based on JPEG double quantization effect
DUAN Xintao, PENG Tao, LI Feifei, WANG Jingjuan
Journal of Computer Applications    2015, 35 (11): 3198-3202.   DOI: 10.11772/j.issn.1001-9081.2015.11.3198
The double quantization effect of JPEG (Joint Photographic Experts Group) compression provides important clues for detecting image tampering. When an original JPEG image undergoes localized tampering and is saved again in JPEG format, the Discrete Cosine Transform (DCT) coefficients of untampered regions undergo double JPEG compression, while those of tampered regions undergo only a single compression. The distribution of Alternating Current (AC) coefficients follows a Laplace probability density function with a suitable parameter. On this basis, a new double-compression probability model for JPEG images was proposed to describe the change of DCT coefficients after double compression, and the Bayes criterion was combined with it to express eigenvalues of image blocks that have undergone single or double JPEG compression. A threshold was set for the eigenvalues, and the tampered region was detected and extracted automatically by classifying the eigenvalues against the threshold. Experimental results show that the method detects and locates the tampered area effectively, and that it outperforms both the blind detection algorithm for composite images based on measuring inconsistencies of JPEG blocking artifacts and the image forgery detection algorithm based on quantization tables, especially when the second compression factor is smaller than the first.
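A simplified sketch of a per-block decision rule in the spirit described above, assuming NumPy: AC coefficients are modeled as Laplace-distributed, and each 8x8 block gets a likelihood-ratio score comparing a single-compression hypothesis against a double-compression one. Both lambda parameters are invented here; in the paper they come from the fitted compression models:

import numpy as np

def laplace_logpdf(x, lam):
    return np.log(lam / 2) - lam * np.abs(x)

def block_score(ac_coeffs, lam_single=0.08, lam_double=0.15):
    """Positive score -> block better explained by single compression (tampered)."""
    single = laplace_logpdf(ac_coeffs, lam_single).sum()
    double = laplace_logpdf(ac_coeffs, lam_double).sum()
    return single - double

rng = np.random.default_rng(0)
untouched = rng.laplace(0, 1 / 0.15, 63)   # 63 AC coefficients of an 8x8 block
tampered = rng.laplace(0, 1 / 0.08, 63)
print(block_score(untouched) < 0, block_score(tampered) > 0)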
Time series outlier detection based on sliding window prediction
YU Yufeng, ZHU Yuelong, WAN Dingsheng, GUAN Xingzhong
Journal of Computer Applications    2014, 34 (8): 2217-2220.   DOI: 10.11772/j.issn.1001-9081.2014.08.2217

To solve data quality problems in hydrological time series analysis and decision-making, a new prediction-based outlier detection algorithm was proposed. The method first splits a given hydrological time series into subsequences in order to build a forecasting model that predicts future values; an outlier is then declared whenever the difference between the predicted and observed values exceeds a certain threshold. The choice of sliding window and of the parameters of the detection algorithm was analyzed, and the results were validated on real data. Experimental results show that the proposed algorithm can effectively detect outliers in time series, raising sensitivity and specificity to at least 80% and 98% respectively.
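A minimal sketch of the detection rule, assuming NumPy: predict each point from the preceding window (a simple moving average stands in for the forecasting model) and flag the point when the prediction error exceeds a threshold scaled by the window's own variability; window size and k are illustrative parameters:

import numpy as np

def sliding_window_outliers(series, window=5, k=3.0):
    x = np.asarray(series, dtype=float)
    flags = np.zeros(len(x), dtype=bool)
    for i in range(window, len(x)):
        w = x[i - window:i]
        pred = w.mean()                             # forecast from the window
        resid = np.abs(w - w.mean()).mean() + 1e-9  # typical in-window deviation
        flags[i] = abs(x[i] - pred) > k * resid     # threshold on the error
    return flags

level = np.sin(np.linspace(0, 6, 200)) * 2 + 10     # smooth water-level-like series
level[120] += 8                                     # injected outlier
print(np.where(sliding_window_outliers(level))[0])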

Cloud framework for hierarchical batch-factor algorithm
YUAN Xinhui, LIU Yong, QI Fengbin
Journal of Computer Applications    2014, 34 (3): 690-694.   DOI: 10.11772/j.issn.1001-9081.2014.03.0690

Bernstein's batch-factor algorithm can test the B-smoothness of many integers in a short time, but it costs so much memory that it is widely used in theoretical analysis yet rarely in practice. To solve this problem, a hierarchical batch-factor cloud framework based on splitting the product of primes into pieces was proposed. The hierarchical design keeps development clear and simple and can easily be ported to other architectures. The cloud computing framework, borrowed from MapReduce, uses services provided by cloud clients, such as distributed memory, shared memory and messaging, to carry out the mapping of the split-primes batch-factor algorithm, eliminating the great memory cost of Bernstein's method. Experiments show that the framework scales well and adapts to batch factoring of different sizes, with the scale of the prime product varying from 1.5 GB to 192 GB, which significantly enhances the practicality of the algorithm.
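A sketch of the core primitive behind batch smoothness testing: multiply the primes up to B once, then strip each candidate's B-smooth part with repeated gcds. Bernstein's full algorithm organizes this into product and remainder trees; this linear version only illustrates the arithmetic:

from math import gcd

def primes_up_to(b):
    sieve = bytearray([1]) * (b + 1)
    sieve[:2] = b"\x00\x00"
    for i in range(2, int(b ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
    return [i for i in range(2, b + 1) if sieve[i]]

def is_smooth(n, prime_product):
    """True if every prime factor of n divides prime_product."""
    while n > 1:
        g = gcd(n, prime_product)
        if g == 1:
            return False
        while n % g == 0:
            n //= g
    return True

z = 1
for p in primes_up_to(100):
    z *= p                        # one big product, shared by all candidates
print([n for n in (97 * 89, 2 ** 20 * 3, 101 * 7, 10007) if is_smooth(n, z)])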

Survey on energy-aware green databases
JIN Peiquan, XING Baoping, JIN Yong, YUE Lihua
Journal of Computer Applications    2014, 34 (1): 46-53.   DOI: 10.11772/j.issn.1001-9081.2014.01.0046
With the global trend toward low-carbon computing and the shift to data-centric computing, energy-saving green database systems have become a hot issue for government, industry and academia. Traditional database systems, however, focus mainly on performance and give little consideration to energy metrics such as energy efficiency and energy proportionality. Based on a requirement analysis of green database systems, key issues in this area were explored, with emphasis on two critical problems: energy efficiency for database systems and energy proportionality for database clusters. Furthermore, future directions for energy-aware green database systems were pointed out to provide new insights for research into this emerging area.
Approach to large matrix multiplication based on Hadoop
SHUN Yuanshuai, CHEN Yao, GUAN Xinjun, LIN Chen
Journal of Computer Applications    2013, 33 (12): 3339-3344.  
Current matrix multiplication algorithms cannot handle large and very large matrices. With the development of the MapReduce programming framework, parallel programs have become the main approach to matrix computation. Matrix multiplication algorithms based on MapReduce were summarized, and an improved strategy for large matrices was proposed that balances the data volume between computation on a single worker node and network transmission. Experimental results show that the parallel algorithms outperform traditional ones on large matrices, and that performance improves as the cluster grows.
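A toy, in-process sketch of the MapReduce matrix-multiplication pattern surveyed above: the map phase emits each partial product keyed by the output cell it contributes to, and the reduce phase sums the partial products per cell. Real Hadoop jobs shard these phases across nodes; the tiny dense matrices here are only for illustration:

from collections import defaultdict

def mapreduce_matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    partials = defaultdict(list)                # shuffle: key -> partial terms
    for i in range(n):                          # "map" over entries of A and B
        for k in range(m):
            for j in range(p):
                partials[(i, j)].append(A[i][k] * B[k][j])
    # "reduce": sum the partial products for each output cell
    return [[sum(partials[(i, j)]) for j in range(p)] for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mapreduce_matmul(A, B))                   # [[19, 22], [43, 50]]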